The rapid proliferation of misinformation and deepfake technology poses a critical challenge to the integrity of digital media. Centralized fact-checking mechanisms often suffer from scalability limits and opacity, making them susceptible to bias and systemic failure. In this paper, we propose FNA.ai, a decentralized multimodal framework designed to detect realistically forged digital content and verify contextual claims. Our pipeline combines a ResNet50 Convolutional Neural Network (CNN) for video artifact analysis with NLP-based factual consistency checks for textual data. To establish a durable, decentralized audit trail of verification outcomes without storing personally identifiable information on-chain, we integrate the AI pipeline with off-chain storage (IPFS) and mint evaluation metadata as “trust badges” via smart contracts on the Polygon Amoy test network. We empirically evaluate our system on the FakeAVCeleb dataset. Our vision engine achieves 95.8% accuracy for video deepfake detection. Simultaneously, the framework demonstrates near-real-time validation latency (≈2.1 seconds) and negligible transaction costs on the blockchain. These results confirm the practical feasibility of low-cost, decentralized media provenance at scale.
Introduction
This paper addresses the rapid spread of fake news and AI-generated media, which overwhelms traditional fact-checking systems and, under centralized verification, lacks transparency. To solve this, we propose FNA.ai, a framework that combines deep learning and blockchain to enable scalable, transparent media verification.
FNA.ai uses a multimodal approach:
A ResNet50-based CNN analyzes video content to detect deepfakes.
An NLP pipeline summarizes and verifies text using transformer models and similarity matching against trusted news sources.
These outputs are combined into a final authenticity score.
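The fusion step above can be sketched as a simple weighted combination of the two per-modality scores. The weights and helper names (`VISION_WEIGHT`, `fuse_scores`) are illustrative assumptions, not the paper's exact formulation:

```python
# Minimal sketch of the score-fusion step. The weighting scheme is an
# assumption for illustration; the deployed pipeline may combine the
# modality scores differently.

VISION_WEIGHT = 0.6  # weight for the ResNet50 video-artifact score
TEXT_WEIGHT = 0.4    # weight for the NLP factual-consistency score


def fuse_scores(vision_score: float, text_score: float) -> float:
    """Combine per-modality authenticity scores (each in [0, 1])
    into a single weighted authenticity score."""
    for s in (vision_score, text_score):
        if not 0.0 <= s <= 1.0:
            raise ValueError("scores must lie in [0, 1]")
    return VISION_WEIGHT * vision_score + TEXT_WEIGHT * text_score


if __name__ == "__main__":
    # A clip whose frames look clean (0.9) but whose claims only
    # moderately match trusted sources (0.5):
    print(round(fuse_scores(0.9, 0.5), 2))  # 0.74
```

A weighted linear fusion keeps each modality's contribution interpretable, which matters when the final score is later anchored on-chain as evidence.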
To ensure trust and immutability, verification results are:
Stored off-chain on IPFS,
Anchored on the Polygon blockchain, where NFTs act as “trust badges” containing verification metadata.
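The off-chain/on-chain split above can be sketched as follows. Here `hashlib` stands in for IPFS content addressing, and the record and badge field names are illustrative assumptions; in the deployed system the JSON record would be pinned to IPFS and its digest minted as an NFT trust badge on Polygon Amoy (e.g. via web3.py), which is omitted here:

```python
# Sketch of preparing the off-chain verification record and the digest a
# trust badge would anchor on-chain. Field names are assumptions; IPFS
# pinning and the smart-contract mint call are deliberately stubbed out.

import hashlib
import json


def make_badge(media_id: str, authenticity_score: float, verdict: str) -> dict:
    """Build the verification record and a content digest suitable for
    anchoring on-chain, keeping the full record off-chain."""
    record = {
        "media_id": media_id,
        "authenticity_score": authenticity_score,
        "verdict": verdict,
    }
    # Canonical serialization so identical records always hash identically,
    # mirroring IPFS's content-addressed storage.
    payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    digest = hashlib.sha256(payload).hexdigest()
    return {"record": record, "content_hash": digest}


if __name__ == "__main__":
    badge = make_badge("clip-001", 0.958, "authentic")
    print(badge["content_hash"])
```

Because only the digest goes on-chain, the design keeps personally identifiable information off the ledger while still making any later tampering with the off-chain record detectable.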
The system is implemented using TensorFlow, FastAPI, and GPU acceleration for real-time performance. Experiments on the FakeAVCeleb dataset show high accuracy (~95.8%) in deepfake detection, while blockchain integration demonstrates feasible latency and cost for near-real-time use.
Overall, the key contribution is a hybrid AI + Web3 system that not only detects fake content across media types but also provides a transparent, tamper-evident record of verification.
Conclusion
FNA.ai presents a transparent, decentralized framework for global news verification. By combining ResNet50-based deepfake detection with the speed and energy efficiency of Polygon’s Proof-of-Stake architecture, the framework provides a durable audit trail of verification outcomes. Empirical benchmarks show strong detection performance (95.8% accuracy) alongside negligible on-chain transaction costs (< $0.01 per transaction).
Future iterations of the framework will focus on countering adaptive deepfake generation by incorporating adversarial training into the model itself. We also aim to adopt Vision Transformers (ViTs) and to implement explicit audio-stream analysis on the FakeAVCeleb dataset, exploiting its full multimodal spectrum.